Nasira, G. M.
- Prediction of Heart Diseases and Cancer in Diabetic Patients Using Data Mining Techniques
Authors
Affiliations
1 Karpagam University, Coimbatore-641021, Tamil Nadu, IN
2 Department of Computer Science, Chikkanna Govt Arts College, Tiruppur-641602, Tamil Nadu, IN
Source
Indian Journal of Science and Technology, Vol 8, No 14 (2015)
Abstract
Background: Chronic, heterogeneous diseases such as heart disease and cancer occur with increasing frequency in diabetic patients, and many people are unaware of the symptoms of these diseases and their chronic complications. Objective: The aim of this paper is to predict heart disease and cancer in diabetic patients. The association between these diseases can be analyzed through their contributing factors, which include obesity, age, duration of diabetes, and other lifestyle factors. Methods: This work consists of two stages. In the first stage, relevant attributes are identified and extracted using the Particle Swarm Optimization (PSO) algorithm. In the second stage, an Adaptive Neuro Fuzzy Inference System (ANFIS) with the Adaptive Group based K-Nearest Neighbor (AGKNN) algorithm is used to classify the data. Findings: The experimental results show very good accuracy for ANFIS with AGKNN combined with PSO-based feature subset selection. The performance is evaluated using standard metrics, confirming the classifier's efficiency for predicting heart disease and cancer in diabetic patients. Application/Improvement: This work demonstrates the diagnosis of these diseases and the importance of predicting them early. In future it can be extended to other related diseases in medical data mining and healthcare.
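The first of the two stages above, PSO-driven feature subset selection, can be illustrated with a minimal binary PSO sketch. The swarm dynamics are simplified, and the toy fitness function (rewarding three hypothetical "informative" features while penalizing subset size) is an assumption for illustration, not the paper's actual objective:

```python
import random

def binary_pso_feature_select(fitness, n_features, n_particles=10, n_iters=30, seed=1):
    """Toy binary PSO: each particle is a 0/1 mask over the feature set."""
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]                  # personal best masks
    gbest = max(pbest, key=fitness)                # global best mask
    for _ in range(n_iters):
        for i, p in enumerate(swarm):
            for d in range(n_features):
                if rng.random() < 0.4:             # drift toward personal best
                    p[d] = pbest[i][d]
                if rng.random() < 0.4:             # drift toward global best
                    p[d] = gbest[d]
                if rng.random() < 0.05:            # mutation keeps exploration alive
                    p[d] = 1 - p[d]
            if fitness(p) > fitness(pbest[i]):
                pbest[i] = p[:]
        gbest = max(pbest + [gbest], key=fitness)  # gbest never gets worse
    return gbest

# Hypothetical fitness: reward "informative" features, penalize subset size.
informative = {0, 2, 4}
def fitness(mask):
    hits = sum(1 for i, b in enumerate(mask) if b and i in informative)
    return hits - 0.1 * sum(mask)

best = binary_pso_feature_select(fitness, n_features=8)
print(best)
```

The selected mask would then feed the second-stage classifier in place of the full attribute set.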
Keywords
Classification, Data Mining, Disease Prediction, Feature Selection, Normalization
- Remote Heart Risk Monitoring System based on Efficient Neural Network and Evolutionary Algorithm
Authors
Affiliations
1 Department of Computer Science, Tiruppur Kumaran College for Women, Bharathiar University, Tirupur - 641687, Tamil Nadu, IN
2 Department of Computer Applications, Chikkanna Government College, Tirupur - 641602, Tamil Nadu, IN
Source
Indian Journal of Science and Technology, Vol 8, No 14 (2015)
Abstract
Objective: The objective of this paper is to predict the risk level of heart disease by applying a Probabilistic Neural Network trained with Particle Swarm Optimization in a remote health monitoring setting. Methods: To achieve this aim, we propose a hybrid model of Particle Swarm Optimization (PSO) and a Probabilistic Neural Network (PNN). PSO is a population based meta-heuristic Evolutionary Algorithm (EA) that explores the search space to find near-optimal feature subsets. The selected optimal features are then used to build a classification model with the PNN. Results: First, we quantified the clinical data set from the UCI machine learning repository and measured its complexity. Thirteen attributes are used, including the age of the person, chest pain type (4 values), serum cholesterol level, blood sugar, resting ECG results, maximum heart rate achieved, old peak, number of major vessels colored by fluoroscopy, slope of the peak exercise ST segment, thal and sex, along with height, weight and smoking as additional factors. The time complexity of the hybrid PSO-PNN obtained promising results compared with the two other approaches, the regression tree and plain PSO optimization. We also propose a data mining process to deal with complexity, missing values and high dimensionality, incorporating data mining functionalities such as characterization, discrimination, association, classification, prediction and evolution analysis. The experiment, carried out in Java on the Statlog heart disease data set, performs well under all noise conditions. Conclusion: The performance was evaluated in terms of time complexity, accuracy, sensitivity and specificity, and the hybrid model of PSO and PNN outperformed the regression tree and PSO.
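A Probabilistic Neural Network is essentially a Parzen-window classifier: every training sample contributes a Gaussian kernel, and the class with the highest average kernel response wins. A minimal sketch, in which the two-feature toy data, labels and sigma value are illustrative assumptions rather than the paper's clinical setup:

```python
import math

def pnn_classify(train, x, sigma=0.5):
    """PNN decision rule: class-conditional Parzen density estimate per class."""
    scores, counts = {}, {}
    for features, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(features, x))
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
        counts[label] = counts.get(label, 0) + 1
    # average kernel response per class approximates the class density at x
    return max(scores, key=lambda c: scores[c] / counts[c])

# hypothetical normalized risk features -> risk label
train = [((0.0, 0.1), "low"), ((0.2, 0.0), "low"),
         ((1.0, 1.1), "high"), ((0.9, 1.0), "high")]
print(pnn_classify(train, (0.1, 0.05)))  # -> low
print(pnn_classify(train, (1.0, 0.9)))   # -> high
```

In the paper's pipeline, the PSO-selected feature subset would replace the raw feature tuples used here.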
Keywords
Expectation Maximization (EM), Heart Disease, Particle Swarm Optimization (PSO), Probabilistic Neural Network (PNN), Remote Heart Risk Monitoring System (RHRMS)
- Exploring Clinical Reasoning in Novices: Knowledge Sharing System Between Social Media and Medical Professionalism
Authors
R. Vidya 1, G. M. Nasira 2
Affiliations
1 Post Graduate and Research Department of Computer Science, St. Joseph's College of Arts and Science (Autonomous), Cuddalore-1, IN
2 Department of Computer Science, Chikkanna Government Arts and Science College, Tirupur, IN
Source
Data Mining and Knowledge Engineering, Vol 6, No 9 (2014), Pagination: 349-353
Abstract
With the power and pervasiveness of today's technology, this paper considers whether the technology available now can help doctors share and access their knowledge efficiently, thereby helping them honor the Hippocratic Oath of the medical field. The technologies considered here are the mobile and web tools used in the doctors' workplace. To implement interoperation in the medical informatics domain, it is necessary to achieve the goal of sharing and multiplexing domain knowledge. This is a heterogeneous problem, and the best method of resolving it is to establish ontology mapping.
Keywords
Mobile, Ontology, Semantic Web, Web Tool.
- Rainfall Prediction Using Logistic Regression Technique
Authors
A. Geetha 1, G. M. Nasira 2
Affiliations
1 Mother Teresa Women's University, Kodaikanal, IN
2 Dept. of Computer Science, Chikkanna Government Arts College, Tirupur - 2., IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 6, No 7 (2014), Pagination: 246-250
Abstract
Data mining is the process of efficiently discovering, extracting or mining knowledge from voluminous amounts of data. It extracts hidden predictive information and is a powerful new technology with great scope and potential to support data analysis and wise decision making. Given the increasing demand for accurate weather prediction, rainfall prediction is the need of the hour. This paper concentrates on meteorological data mining, which focuses on mining meteorological data to produce predictions. As a pilot study, we applied the model to an existing rainfall data set to predict whether there was rainfall or not using the logistic regression technique.
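The technique itself can be sketched with binary logistic regression trained by plain batch gradient descent. The two weather features (humidity, cloud cover), the toy data and the learning-rate settings are illustrative assumptions, not the paper's actual RapidMiner setup:

```python
import math

def train_logistic(data, lr=0.5, epochs=2000):
    """Batch gradient descent on the logistic loss for binary labels."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n, 0.0
        for x, y in data:
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y                      # gradient of log-loss wrt the logit
            for i in range(n):
                gw[i] += err * x[i]
            gb += err
        for i in range(n):
            w[i] -= lr * gw[i] / len(data)
        b -= lr * gb / len(data)
    return w, b

def predict(w, b, x):
    """Probability of rain for feature vector x."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# hypothetical (humidity, cloud cover) -> rain (1) / no rain (0)
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.9), 1),
        ((0.2, 0.1), 0), ((0.3, 0.2), 0), ((0.1, 0.3), 0)]
w, b = train_logistic(data)
print(predict(w, b, (0.85, 0.9)))   # high probability of rain
print(predict(w, b, (0.15, 0.1)))   # low probability of rain
```

Thresholding the predicted probability at 0.5 gives the rain / no-rain decision described above.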
Keywords
Data Mining, Regression, Logistic Regression Technique, Rapidminer.
- Artificial Neural Network Technique, Statistical and FFT in Identifying Defect in Plain Woven Fabric
Authors
Affiliations
1 Dept. of CSE, Kathir College of Engg., Coimbatore, IN
2 Dept. of CS, Chikkanna Government Arts College, Tirupur, IN
3 Syed Ammal College of Engineering, Ramanathapuram, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 6, No 5 (2014), Pagination: 180-183
Abstract
The textile industry is one of the main sources of revenue generation. The price of fabrics is severely affected by fabric defects, which represent a major threat to the industry. Only a small percentage of defects are detected by manual inspection, even with highly trained, experienced inspectors. An automatic defect detection system can increase the detection percentage; it reduces fabrication cost and is economically profitable when labor cost and associated benefits are considered. In this paper we propose a method to detect defects in woven fabric based on changes in fabric intensity. The images are acquired and preprocessed, and statistical features based on the co-occurrence matrix and the linear correlation coefficient via the fast Fourier transform are extracted. An Artificial Neural Network is used as the classification model: the extracted features are given as input to the network, which identifies the defect. The proposed method shows better performance when compared with existing methods.
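The statistical branch of that pipeline can be sketched with a gray-level co-occurrence matrix and two classic texture measures; the tiny 4x4 "fabric patch" arrays and 4 gray levels are illustrative assumptions:

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalized Gray-Level Co-occurrence Matrix for one pixel offset."""
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for r in range(len(image)):
        for c in range(len(image[0])):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < len(image) and 0 <= c2 < len(image[0]):
                m[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def energy(g):
    """Sum of squared entries: high for uniform texture."""
    return sum(v * v for row in g for v in row)

def contrast(g):
    """Intensity-difference weighted sum: high for abrupt local changes."""
    return sum(((i - j) ** 2) * g[i][j] for i in range(len(g)) for j in range(len(g)))

uniform = [[1, 1, 1, 1]] * 4                                   # defect-free weave
defect = [[0, 3, 0, 3], [3, 0, 3, 0], [0, 3, 0, 3], [3, 0, 3, 0]]  # intensity jumps
print(energy(glcm(uniform)), contrast(glcm(uniform)))
print(energy(glcm(defect)), contrast(glcm(defect)))
```

Feature vectors built from such measures (plus the FFT correlation features) would be the network's inputs.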
Keywords
Artificial Neural Networks, Image Processing, Statistical Approach, Spectral Approach, Gray Level Co-Occurrence Matrix, Fourier Transform.
- Off-Line Handwritten Character Recognition with Hidden Markov Models
Authors
G. M. Nasira 1, P. Banumathi 2
Affiliations
1 Sasurie College of Engineering, Vijayamangalam, Tirupur (Dt), IN
2 Department of Computer Science & Engineering, Kathir College of Engineering, Neelambur, Coimbatore (Dt), IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 3, No 1 (2011), Pagination: 57-61
Abstract
Handwritten character recognition is a process that associates a symbolic meaning with letters, symbols and numbers drawn in an image. Much research has been devoted to handwritten character recognition in areas such as image processing, pattern recognition and artificial intelligence, and recognition of offline handwritten characters remains the goal of considerable effort in pattern recognition. Many techniques have been applied, but recognition efficiency and accuracy are still limited. This paper therefore presents a complete system for recognizing offline handwritten characters using a Hidden Markov Model (HMM), in which an artificial neural network is trained to identify similarities and patterns among different handwritten samples with high accuracy. The HMM offers freedom to manipulate the training and verification processes, and HMMs are more powerful modeling tools than many statistical methods.
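The HMM decoding step can be sketched with the standard Viterbi algorithm; the two-state model and the "loop"/"line" stroke observations below are hypothetical, not the paper's trained parameters:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence."""
    # V[t][s] = (best probability of reaching s at time t, best path to s)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        prev, cur = V[-1], {}
        for s in states:
            p, path = max(
                (prev[ps][0] * trans_p[ps][s] * emit_p[s][o], prev[ps][1] + [s])
                for ps in states)
            cur[s] = (p, path)
        V.append(cur)
    return max(V[-1].values())[1]

# hypothetical two-character model over stroke features
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"loop": 0.8, "line": 0.2}, "B": {"loop": 0.1, "line": 0.9}}
print(viterbi(["loop", "line", "line"], states, start_p, trans_p, emit_p))
# -> ['A', 'B', 'B']
```

In a full recognizer the emission probabilities would come from features extracted after preprocessing and segmentation, as the keywords below indicate.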
Keywords
Artificial Neural Network (ANN), Feature Extraction, Handwritten Character Recognition, Hidden Markov Model (HMM), Preprocessing, Segmentation.
- Impregnable Cloud Storage using Surrogate Twofold Encryption Technique (STET) in Cloud
Authors
Affiliations
1 Research & Development Centre, Bharathiar University, Coimbatore, Tamil Nadu, IN
2 Department of Computer Applications, Chikkanna Government Arts College, Tirupur, Tamil Nadu, IN
Source
ScieXplore: International Journal of Research in Science, Vol 2, No 2 (2015), Pagination: 24-30
Abstract
Privacy and security in cloud computing have become a challenging task, as several existing techniques secure only the perimeter level. When an intruder tries to break the formatted encryption scheme protected by the service provider, the data cannot be hacked; however, data insecurity still has no complete solution, since the whole process is provided by the service provider and the cloud is a public entity. In our proposal, we apply several randomly chosen algorithmic techniques for efficient cloud storage and security. Organizations use different service and deployment models in the cloud, and these affect the efficiency and correctness of data. The technique emphasizing authorization models is the Surrogate Twofold Encryption Technique (STET). Data from the main database (the owner's data) is stored in the cloud by transferring the contents to a substitute system, encrypting the original data in the surrogate system, and then re-encrypting the data inside the cloud, thus forming a triple layer of protection for the database stored in the cloud. Moreover, under arbitrary access control, the enhanced security layer randomly chooses two encryption techniques from four cryptographic algorithms; the two selected algorithms are known only to the corresponding encrypting system. Data thus passes through twofold encryption, and decryption likewise must pass through the twofold decryption. A synchronized flag set records the arbitrary choice of algorithms, which promotes secure algorithmic encryption. To make it more complex, the ciphered information stored in the cloud is visible and usable only by the corresponding system or a well-known authorized user.
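The two-of-four layering with a flag set can be sketched as follows. This is purely illustrative and not real cryptography: the SHA-256 XOR keystream, the four named layers and their keys are all assumptions standing in for the four unnamed cryptographic algorithms:

```python
import hashlib
import random

def _stream(key, n):
    """Keystream from counter-mode SHA-256 (illustrative only, not secure)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data, key):
    """One 'encryption' layer: XOR the data with the key's stream."""
    return bytes(a ^ b for a, b in zip(data, _stream(key, len(data))))

# pool of four layer algorithms; the chosen pair is recorded in a flag set
POOL = {"alpha": "k-alpha", "beta": "k-beta", "gamma": "k-gamma", "delta": "k-delta"}

def twofold_encrypt(data, rng):
    flags = rng.sample(sorted(POOL), 2)      # arbitrary choice of two layers
    out = data
    for name in flags:
        out = xor_layer(out, POOL[name])
    return flags, out

def twofold_decrypt(flags, data):
    out = data
    for name in reversed(flags):             # XOR layers are self-inverse
        out = xor_layer(out, POOL[name])
    return out

flags, ct = twofold_encrypt(b"owner record", random.Random(7))
print(flags, twofold_decrypt(flags, ct))
```

Only a party holding the flag set and the layer keys can invert the two layers, mirroring the "known only to their corresponding system" property described above.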
Keywords
Arbitrary Access Control, Encrypting Algorithm, Flag Set, Identity Based, Service Provider, Surrogate Twofold Encryption Technique (STET), Triple Layer Protection.
- Prediction of Cervical Cancer using Hybrid Induction Technique: A Solution for Human Hereditary Disease Patterns
Authors
R. Vidya 1, G. M. Nasira 2
Affiliations
1 Department of Computer Science, MS University, Tirunelveli - 627012, Tamil Nadu, IN
2 Department of Computer Science, Chikkanna Government College, Tirupur – 641602, Tamil Nadu, IN
Source
Indian Journal of Science and Technology, Vol 9, No 30 (2016)
Abstract
Background/Objective: Cervical cancer is one of the most prevalent and serious diseases affecting women around the world. Normally, cells grow and divide to produce more cells only when the body needs them. When this orderly process breaks down and cells keep dividing when new cells are not needed, the extra tissue forms a growth called a tumor. Tumors can be classified as benign or malignant. Benign tumors are not cancer: they can usually be removed and, in most cases, they do not come back; most importantly, the cells in benign tumors do not spread to other parts of the body. Malignant tumors are cancer cells: they can damage nearby tissues and organs, and they are a threat to life. Methods/Statistical Analysis: In this research work, prediction of normal cervix or cancer cervix is carried out with the aid of powerful data mining algorithms, which play an indispensable role in prediction, especially in the medical field. Using this concept, the Classification and Regression Tree (CART) algorithm, the Random Forest Tree (RFT) algorithm and RFT with K-means learning are introduced for predicting normal cervix or cancer cervix. Data was collected from NCBI (National Center for Biotechnology Information); the data set contains 500 records and 61 variables (biopsy numerical values with gene identifiers), and the output is presented in the form of a prediction tree. As stated, we selected a sample of 100 records with 61 biopsy features. Based on this biopsy data, an awareness program was conducted and a follow-up survey identified the changes women experience during this transition period. To collect data efficiently, a personal interview program was conducted among rural women in various places, and in collaboration with JIPMER hospital the participants were checked for cervical cancer. The biopsy test results were put through statistical analysis and processed in MATLAB for algorithm testing; for evaluation the records were segregated into 100 test data and 60 training data. Findings: The performance of the algorithms was compared in terms of sensitivity, specificity and accuracy to determine the best predictor of cervical cancer. First, the regression tree methodology was used for prediction: the CART binary tree yields two results, normal cervix or cancer cervix, and a splitting criterion, the Gini index, is used to identify the diversity that exists in the cervical data; this gave 83.87% accuracy. Random Forest Tree (RFT), an ensemble supervised machine learning algorithm, was then used to improve the prediction accuracy, and with MATLAB coding we achieved 93.54%. Finally, a new logic combining two algorithms, RFT with K-means learning, was applied, with whitening as a pre-process for K-means clustering to get the best prediction result. Since the K-means algorithm is efficient at processing huge data sets, a high accuracy of 96.77% was achieved with the RFT-K-means learning tree. Randomization is applied in two ways: 1. bagging for random bootstrap sampling, and 2. random selection of input attributes for decision tree generation. This creates an unbiased estimate of the generalization error as the trees grow into a forest, and the time complexity of K-means is derived. Applications/Improvements: Cervical cancer diagnosis and prognosis are two medical applications that pose a great challenge to researchers. The algorithms optimize a cost function defined on the Euclidean distance between the data points and the cluster means. The combination of RFT with the K-means algorithm is the novelty of our research work, with which we achieved a high-accuracy result. Accurate prediction of cervical cancer has been among the toughest tasks in medical data mining because proper data sets are not available; much research has developed techniques that improve prediction accuracy from images, but in our work cervical cancer is predicted from numerical NCBI data. This research is a boon for creating expert medical decision making systems and offers medical practitioners an optimal prediction model for cervical cancer.
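The Gini splitting criterion named in the findings can be sketched directly; the toy biopsy-like rows and the single-feature threshold are illustrative assumptions:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_gini(rows, feature, threshold):
    """Weighted Gini impurity of a CART-style binary split on one feature."""
    left = [y for x, y in rows if x[feature] <= threshold]
    right = [y for x, y in rows if x[feature] > threshold]
    n = len(rows)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# hypothetical (biopsy feature vector, class) rows
rows = [((0.1,), "normal"), ((0.2,), "normal"), ((0.8,), "cancer"), ((0.9,), "cancer")]
print(gini([y for _, y in rows]))   # maximally mixed two-class set
print(split_gini(rows, 0, 0.5))     # a pure split drives impurity to zero
```

CART greedily picks the feature and threshold minimizing this weighted impurity at each node; a random forest repeats this on bootstrap samples with random feature subsets, as described above.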
Keywords
Cervical Cancer, CART, Data Mining, Hereditary Pattern, K-Mean, RFT.
- Analyze and Differentiate Uric Acid Stones and Calcium Stones from Images Using Statistical Parameters
Authors
G. M. Nasira 1, M. Ranjitha 2
Affiliations
1 Department of Computer Science, Chikkanna Government Arts College, IN
2 Department of Information Technology, CMR Institute of Management Studies, IN
Source
ICTACT Journal on Image and Video Processing, Vol 5, No 3 (2015), Pagination: 980-983
Abstract
Image analysis plays a vital role in medical diagnostics, and texture analysis is a major source of discrimination within it. In this paper, we analyse images of kidney stones to differentiate between the chemical compositions of different stone types. The most common types are calcium and uric acid stones, so our study focuses on these two categories. Identifying the chemical composition is crucial, as it helps patients keep control of their diet. A statistical comparison is made between the two categories, and significant differences are observed in several classic parameters. A new approach is presented that uses only selected statistical parameters; it thus differs from previous approaches in differentiating the types of stones from images without clinical intervention.
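The entropy and energy parameters named in the keywords can be sketched as first-order histogram statistics; the tiny 4x4 "stone region" arrays and 8 gray levels are illustrative assumptions, not real radiograph data:

```python
import math

def texture_stats(image, levels=8):
    """First-order entropy and energy of a grayscale region's histogram."""
    hist = [0] * levels
    n = 0
    for row in image:
        for v in row:
            hist[v] += 1
            n += 1
    probs = [h / n for h in hist if h]
    entropy = -sum(p * math.log2(p) for p in probs)   # randomness of intensities
    energy = sum(p * p for p in probs)                # uniformity of intensities
    return entropy, energy

smooth = [[3, 3, 3, 3]] * 4                                      # homogeneous region
textured = [[0, 7, 2, 5], [6, 1, 4, 3], [7, 0, 5, 2], [1, 6, 3, 4]]  # varied region
print(texture_stats(smooth))
print(texture_stats(textured))
```

A homogeneous region gives minimal entropy and maximal energy; differences in such statistics between stone images are the kind of discriminating signal the paper relies on.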
Keywords
Image Analysis, Uric Acid Stones, Calcium Stones, Entropy, Energy.
- Wrapper Based Feature Selection for CT Image
Authors
D. Chitra 1, G. M. Nasira 2
Affiliations
1 Department of Computer Science, Government Arts College, Salem, IN
2 Department of Computer Science, Chikkanna Government Arts College, IN
Source
ICTACT Journal on Image and Video Processing, Vol 6, No 1 (2015), Pagination: 1096-1103
Abstract
Diagnostic imaging is invaluable. Magnetic Resonance Imaging (MRI), digital mammography, Computed Tomography (CT) and other modalities provide effective noninvasive mapping of a subject's anatomy and increase knowledge of normal and diseased anatomy for medical research, in addition to being a critical component of diagnosis and treatment. In this work various feature selection algorithms are investigated and a swarm intelligence algorithm based on Bacterial Foraging is proposed. Features are extracted using wavelets and the Gray-Level Co-occurrence Matrix (GLCM), the obtained features are fused using the Median Absolute Deviation (MAD) after normalization, and the feature selection techniques are then investigated. The results show the improved performance of Bacterial Foraging based feature selection across different classifiers.
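What makes a method "wrapper based" is that candidate feature subsets are scored by the classifier itself. A minimal sketch using greedy forward selection with a leave-one-out nearest-centroid score (the toy rows, the nearest-centroid classifier and the greedy search are illustrative assumptions; the paper's wrapper uses Bacterial Foraging, not greedy search):

```python
def loo_accuracy(rows, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier on chosen features."""
    correct = 0
    for i, (x, y) in enumerate(rows):
        cents = {}
        for j, (xj, yj) in enumerate(rows):
            if j != i:
                cents.setdefault(yj, []).append([xj[f] for f in feats])
        best, bd = None, float("inf")
        for label, pts in cents.items():
            c = [sum(col) / len(col) for col in zip(*pts)]
            d = sum((a - b) ** 2 for a, b in zip([x[f] for f in feats], c))
            if d < bd:
                best, bd = label, d
        correct += best == y
    return correct / len(rows)

def forward_select(rows, n_features):
    """Greedily add the feature that most improves the wrapper score."""
    chosen, score = [], 0.0
    while True:
        best_f, best_s = None, score
        for f in range(n_features):
            if f not in chosen:
                s = loo_accuracy(rows, chosen + [f])
                if s > best_s:
                    best_f, best_s = f, s
        if best_f is None:
            return chosen, score
        chosen, score = chosen + [best_f], best_s

# feature 0 separates the classes; feature 1 is noise
rows = [((0.0, 0.9), "a"), ((0.1, 0.1), "a"), ((1.0, 0.8), "b"), ((0.9, 0.2), "b")]
print(forward_select(rows, 2))
```

A swarm-based wrapper like Bacterial Foraging explores subset masks with the same kind of classifier-driven score instead of adding features one at a time.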
Keywords
Computed Tomography (CT), Wavelet, Gray-Level Co-Occurrence Matrix (GLCM), Median Absolute Deviation (MAD), Correlation Based Feature Selection (CFS), Bacterial Foraging Optimization (BFO).
- NDEL based Performance Analysis of Position based Opportunistic Routing Protocols
Authors
Affiliations
1 M. S. University, Tirunelveli - 627 012, Tamil Nadu, IN
2 Chikkanna Arts and Science College, Tirupur - 641 602, Tamil Nadu, IN
Source
Indian Journal of Science and Technology, Vol 8, No 23 (2015)
Abstract
Opportunistic routing protocols focus on the reliability of sending data packets from source to destination using different methods. Mobility poses a significant challenge in such networks: routing protocols generally travel along a trusted path, which can create problems. A relay candidate set is selected for sending information; on each transmission, the hop that satisfies the selection criteria on receiving the packet forwards the data toward the destination. We therefore propose power, speed and link stability protocols that use the properties of location based routing over the wireless transmission medium. Since hop selection forms the basic criterion for sending data, the proposed method is designed to be energy-efficient while accommodating more hosts. Finally, we give a common approach for accepting nodes when selecting a path to the destination.
Keywords
Adhoc Network, Magnet, Position based Routing Protocol, Reliability
- Video Steganography Based on Hash Polynomial Function for Secure Communication Use Fourier Transform with Security Method for Mounting of Multilayer Security
Authors
R. Umadevi 1, G. M. Nasira 2
Affiliations
1 Department of Computer Science, Periyar University, Salem – 636011, Tamil Nadu, IN
2 Department of Computer Science, Chikkanna Government Arts College, Tirupur – 641602, Tamil Nadu, IN
Source
Indian Journal of Science and Technology, Vol 8, No 23 (2015)
Abstract
Nowadays, several well-organized data hiding methods have been introduced and implemented over the video steganography channel. Data hiding achieves covert data communication by embedding information into a video carrier to form an unrecognizable code stream. Considering the security aspect of the communication process, especially for video, this paper enhances the security QoS factor. Because of heavy packet loss and channel bit errors, video steganalysis has become a significant addition to the security aspects and is extensively developed as a complementary line to conventional security methods. In this work, the Fourier Transform With Security (FTWS) technique is applied to improve the security of video files in the communication process, and a secure hash polynomial function providing multi-layer security through modular additions and density functions is presented. Through the analysis of performance factors, security is the key factor in our research. Finally, the simulation results were evaluated against Video Quality Experts Group parameters, with multilayered security, for 110 frames using MATLAB; they show that FTWS enhances security significantly compared with typical state-of-the-art methods.
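The data-hiding side of such a system can be illustrated, independently of the FTWS layer, with classic least-significant-bit embedding. Treating one frame as a flat byte array is a simplifying assumption, and this sketch omits the hash polynomial and Fourier-domain steps described above:

```python
def embed_bits(carrier, payload):
    """Hide payload bits in the least-significant bit of each carrier byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b          # overwrite only the lowest bit
    return bytes(out)

def extract_bits(carrier, n_bytes):
    """Recover n_bytes of payload from the carrier's low bits."""
    bits = [carrier[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(n_bytes))

frame = bytes(range(64))                      # stand-in for one frame's samples
stego = embed_bits(frame, b"key")
print(extract_bits(stego, 3))
```

Because only the lowest bit of each sample changes, the visible frame is nearly identical to the original; a security layer like FTWS would additionally scramble which positions carry the payload.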
Keywords
Fourier Transform, Hash Polynomial Function, Packet Loss, Security, Video Steganography
- Fault Tolerance in Job Scheduling through Fault Management Framework Using SOA in Grid
Authors
Affiliations
1 Department of Computer Science, Periyar University, IN
2 Department of Computer Science, Chikkanna Government Arts College, IN
Source
ICTACT Journal on Soft Computing, Vol 7, No 2 (2017), Pagination: 1381-1385
Abstract
The rapid development of computing resources has enhanced the performance of computers and reduced their costs. This availability of low cost powerful computers, combined with the popularity of the Internet and high-speed networks, has shifted the computing environment from distributed to grid computing. A grid is a kind of distributed system that supports the allocation and coordinated use of geographically distributed, multi-owner resources, independently of their physical form and location, in dynamic virtual organizations that share the common objective of solving large-scale applications. Any type of failure can thus happen at any point of time, and a job running in a grid environment might fail. Fault tolerance is therefore an important and challenging concern in grid computing, as the stability of individual grid resources cannot be guaranteed, and a consistent fault tolerant system is required to make computational grids more effective. To meet user expectations in terms of performance and efficiency, the grid system needs an SOA Fault Management Framework for the sharing of tasks with fault tolerance. The framework attempts to improve the response time of users' submitted applications by ensuring maximal utilization of the available resources. The main aim is to avoid, where possible, the situation in which some processors are congested with a set of tasks while others are lightly loaded or even idle.
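The load sharing goal in the last sentence, keeping any one processor from being congested while others sit idle, can be sketched with a toy least-loaded-first scheduler plus a failover step; the node names, task costs and longest-processing-time heuristic below are illustrative assumptions, not the framework's actual policy:

```python
import heapq

def schedule(tasks, nodes):
    """Assign each task to the currently least-loaded node (LPT heuristic)."""
    heap = [(0, n) for n in nodes]            # (current load, node name)
    heapq.heapify(heap)
    placement = {n: [] for n in nodes}
    for cost in sorted(tasks, reverse=True):  # largest tasks first
        load, n = heapq.heappop(heap)
        placement[n].append(cost)
        heapq.heappush(heap, (load + cost, n))
    return placement

def fail_over(placement, failed):
    """On node failure, redistribute all work across the surviving nodes."""
    survivors = [n for n in placement if n != failed]
    tasks = [c for n in placement for c in placement[n]]
    return schedule(tasks, survivors)

p = schedule([5, 3, 3, 2, 2, 1], ["n1", "n2", "n3"])
print(p)
p2 = fail_over(p, "n2")
print(p2)
```

A real fault management framework would of course reschedule only the failed node's tasks and react to monitoring events, but the balancing objective is the same.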
Keywords
Resource Allocation, Job Scheduling, Load Sharing Algorithm, Fault Tolerance, Grid Environment.